Information Technology and Software

Computer Vision Lends Precision to Robotic Grappling
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of grapple fixtures relative to the robotic arm's end effectors. Because the borescope camera sensors are typically located several centimeters from their respective end effector grasping mechanisms, the software solves a Perspective-n-Point problem, using computer vision algorithms to compute the alignment between the camera eyepoint and the position of the end effector.
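The source does not publish the software's internals, but the camera-to-end-effector alignment it describes is, at its core, a composition of rigid-body transforms: a pose solved in the camera frame must be re-expressed in the end-effector frame using a known calibration offset. The sketch below illustrates only that frame-composition step; all function names and numeric values (the 6 cm offset, the fixture pose) are hypothetical.

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transform matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(theta_rad, tx=0.0, ty=0.0, tz=0.0):
    """Homogeneous transform: rotation about Z plus a translation."""
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

# Hypothetical calibration: the camera eyepoint sits 6 cm from the
# end effector along the end effector's X axis.
T_ee_cam = rot_z(0.0, tx=0.06)

# Hypothetical Perspective-n-Point result: the grapple fixture pose
# in the camera frame (1.2 m ahead, rotated 10 degrees about Z).
T_cam_fixture = rot_z(math.radians(10.0), tz=1.20)

# Fixture pose re-expressed in the end-effector frame, which is the
# quantity the operator actually needs for alignment.
T_ee_fixture = mat_mul(T_ee_cam, T_cam_fixture)
offset_x = T_ee_fixture[0][3]  # lateral offset, meters
offset_z = T_ee_fixture[2][3]  # standoff distance, meters
```

In a real pipeline the camera-frame pose would come from a PnP solver (e.g., OpenCV's `solvePnP`) rather than being hard-coded, but the composition step is the same.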
The software includes a machine learning component that uses a trained Region-based Convolutional Neural Network (R-CNN) to analyze a live camera feed and identify ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations.
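The source does not say how range to a detected fixture is derived. One common approach, shown below purely as an illustrative assumption, is a pinhole-camera estimate from the detection's bounding-box width and the fixture's known physical size; the fixture width, focal length, and pixel values are all made up for the example.

```python
def range_from_detection(bbox_width_px, fixture_width_m, focal_length_px):
    """Estimate range to a fixture of known physical width from its
    detected bounding-box width, via the pinhole model: Z = f * W / w."""
    return focal_length_px * fixture_width_m / bbox_width_px

# Hypothetical detection: a 0.30 m wide fixture spans 200 px in a
# camera with a 1000 px focal length.
est_range_m = range_from_detection(200.0, 0.30, 1000.0)
```

Under these assumed numbers the estimate lands at 1.5 m, i.e., at the extended operational range the listing cites.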
Industrial automation and robotics applications that rely on computer vision may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications operating outside of controlled environments may also benefit from the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit.
This computer vision software is at technology readiness level (TRL) 6 (system/subsystem model or prototype demonstration in an operational environment) and is now available to license. Please note that NASA does not manufacture products itself for commercial sale.
Robotics Automation and Control

Robotic System for Infrastructure Reconnaissance
The robotic system comprises six main components: the orb that performs the reconnaissance, an orb injector housing that attaches to a piping network, a tether and reel subsystem that attaches to the back of the injector housing, a fluid injection subsystem that attaches toward the front of the injector housing, an external power and data subsystem, and associated control and monitoring software.
Usage of the system begins with an operator attaching the injector housing, with the orb stowed inside, to a flanged gate valve belonging to the piping network of concern. Requisite power, data, and fluid subsystems are attached, and the system is energized for usage. The orb is released via the tether and reel, and a controlled fluid force is imparted on the orb to help guide it along its mission. The tether supplies power and guidance to the orb, and relays real-time data back to the operator.
The orb’s interior features a modular plug-and-play architecture which may comprise COTS instrumentation for reconnaissance or investigation, LIDAR, and inertial measurement and motion sensors. This instrumentation could be used in combination with other subsystems such as lighting and core and sample retrieval mechanisms. These components are supported by other onboard devices such as a CPU, power source and controller, and data transmission encoders and multiplexers.
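The listing does not describe the orb's onboard software interface, but a plug-and-play payload architecture like the one above implies some registry through which interchangeable instruments are installed and polled. The sketch below is a hypothetical illustration of that pattern; the class, slot names, and readings are all invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class PayloadBay:
    """Hypothetical registry for the orb's plug-and-play payload slots.

    The actual onboard architecture is not described in the source;
    this only illustrates a modular, swappable-instrument interface."""
    drivers: Dict[str, Callable[[], dict]] = field(default_factory=dict)

    def register(self, name: str, read_fn: Callable[[], dict]) -> None:
        """Install an instrument driver under a slot name."""
        self.drivers[name] = read_fn

    def poll(self) -> Dict[str, dict]:
        """Take one reconnaissance sample from every installed instrument."""
        return {name: read() for name, read in self.drivers.items()}

# Example: a LIDAR unit and an inertial sensor plugged into the bay.
bay = PayloadBay()
bay.register("lidar", lambda: {"range_m": 3.2})
bay.register("imu", lambda: {"accel_mps2": [0.0, 0.0, 9.81]})
sample = bay.poll()
```

A registry like this is what lets one instrument be swapped for another without changing the control and monitoring software on the operator's side.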
The Robotic System for Infrastructure Reconnaissance is at TRL 8 (actual system completed and "flight qualified" through test and demonstration), and is now available for licensing. Please note that NASA does not manufacture products itself for commercial sale.